Responsible Machine Learning
Causal Feature Selection for Responsible Machine Learning
Moraffah, Raha, Sheth, Paras, Vishnubhatla, Saketh, Liu, Huan
Machine Learning (ML) has become an integral part of many real-world applications. As a result, the need for responsible machine learning has emerged, focusing on aligning ML models with ethical and social values while enhancing their reliability and trustworthiness. Responsible ML involves many issues; this survey addresses four main ones: interpretability, fairness, adversarial robustness, and domain generalization. Feature selection plays a pivotal role in responsible ML tasks. However, building on statistical correlations between variables can lead to spurious patterns, biases, and compromised performance. This survey focuses on the current study of causal feature selection: what it is and how it can reinforce the four aspects of responsible ML. By identifying features with causal impacts on outcomes and distinguishing causality from correlation, causal feature selection is posited as a unique approach to ensuring that ML models are ethically and socially responsible in high-stakes applications.
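The distinction the abstract draws between correlational and causal relevance can be illustrated with a small simulated example (a minimal sketch on made-up data; the variables, the confounder Z, and the use of partial correlation as the conditioning device are assumptions for illustration, not methods taken from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: Z confounds X_spur and Y,
# while X_causal has a direct causal effect on Y.
z = rng.normal(size=n)
x_causal = rng.normal(size=n)
x_spur = 2.0 * z + rng.normal(size=n)           # linked to Y only through Z
y = 1.5 * x_causal + 2.0 * z + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    # Correlation of a and b after linearly regressing out the confounder c.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

# Marginal correlation flags BOTH features as "relevant" ...
print(corr(x_causal, y), corr(x_spur, y))
# ... but conditioning on the confounder exposes the spurious one.
print(partial_corr(x_causal, y, z), partial_corr(x_spur, y, z))
```

A purely correlational selector would keep `x_spur`; a causal selector that conditions on the confounder drops it, which is the failure mode the survey ties to bias and compromised performance.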
Towards learning to explain with concept bottleneck models: mitigating information leakage
Lockhart, Joshua, Marchesotti, Nicolas, Magazzeni, Daniele, Veloso, Manuela
Concept bottleneck models perform classification by first predicting which of a list of human-provided concepts are true of a datapoint. A downstream model then uses these predicted concept labels to predict the target label, so the predicted concepts act as a rationale for the target prediction. Model trust issues emerge in this paradigm when soft concept labels are used: it has previously been observed that extra information about the data distribution leaks into the concept predictions. In this work we show how Monte-Carlo Dropout can be used to attain soft concept predictions that do not contain leaked information.
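The Monte-Carlo Dropout idea can be sketched as follows (a minimal numpy toy with hypothetical weights, not the paper's architecture or training procedure): dropout is kept active at inference, and the soft concept labels are the average over many stochastic forward passes rather than a single deterministic output.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical single-layer concept predictor: logits = x @ W + b,
# mapping 4 input features to 3 binary concepts.
W = rng.normal(size=(4, 3))
b = np.zeros(3)

def mc_dropout_concepts(x, p=0.5, T=200):
    """Soft concept labels from T stochastic forward passes with
    dropout kept ON at inference time (Monte-Carlo Dropout)."""
    probs = np.zeros(3)
    for _ in range(T):
        mask = rng.random(x.shape) > p            # drop each input w.p. p
        x_drop = np.where(mask, x / (1 - p), 0.0) # inverted-dropout scaling
        probs += sigmoid(x_drop @ W + b)
    return probs / T                              # averaged soft predictions

x = rng.normal(size=4)
soft = mc_dropout_concepts(x)
print(soft)  # one soft label per concept, each strictly in (0, 1)
```

The averaging is what matters here: each pass sees a randomly thinned input, so the averaged concept probabilities reflect predictive uncertainty rather than a deterministic encoding into which side-channel information could be packed.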
A "Glass Box" Approach to Responsible Machine Learning - insideBIGDATA
Machine learning doesn't always have to be an abstruse technology. The multi-parameter and hyper-parameter methodology of complex deep neural networks, for example, is only one manifestation of this form of cognitive computing. There are other machine learning varieties (and even some involving deep neural networks) in which the results of models, how they were determined, and which intricacies influenced them, are much more transparent. It all depends on how well organizations understand their data provenance. Comprehending just about everything that happened to the training data for models, as well as to the production data those models encounter, is integral to explaining, refining, and improving their results.
ICLR 2022 Workshop on Socially Responsible Machine Learning
We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Unless the authors indicate otherwise, we will provide PDFs of all accepted papers on https://iclrsrml.github.io/. There will be no archival proceedings. We are using CMT3 to manage submissions.
8 Principles For Responsible Machine Learning
Machine Learning and Artificial Intelligence have proven to be among the most transformative, door-opening inventions in the history of technology and mankind. Because of that transformative nature, Artificial Intelligence can produce both positive and negative outcomes, and outside the field, a large portion of the population is scared of AI and its…
dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
Baniecki, Hubert, Kretowicz, Wojciech, Piatyszek, Piotr, Wisniewski, Jakub, Biecek, Przemyslaw
The black-box nature of modern predictive models leads to an opaqueness debt phenomenon, inflicting increased risks of discrimination, lack of reproducibility, and deflated performance due to data drift. To manage these risks, good MLOps practices call for better validation of model performance and fairness, higher explainability, and continuous monitoring. The necessity of deeper model transparency arises not only from scientific and social domains, but also from emerging laws and regulations on artificial intelligence. To facilitate the development of responsible machine learning models, we showcase dalex, a Python package which implements a model-agnostic interface for interactive model exploration. It adopts the design crafted through the development of various tools for responsible machine learning, and thus aims at unifying the existing solutions. The library's source code and documentation are available under an open license at https://python.drwhy.ai/.
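The model-agnostic contract described here — an explainer that needs nothing from the model beyond a predict function — can be illustrated with a hand-rolled permutation-importance sketch (illustrative only; this is not the dalex API, and the toy model and its feature effects are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box": any callable predict(X) -> predictions works, which is
# the model-agnostic contract packages like dalex build their explainers on.
def predict(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]  # feature 0 matters, feature 1 barely

X = rng.normal(size=(1000, 2))
y = predict(X) + rng.normal(scale=0.1, size=1000)

def permutation_importance(predict, X, y, n_repeats=20):
    """Model-agnostic variable importance: how much the loss grows when one
    feature's column is shuffled, breaking its link to the target."""
    base = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += np.mean((predict(Xp) - y) ** 2) - base
    return scores / n_repeats

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates
```

Because the procedure only queries `predict`, the same exploration applies unchanged to a linear model, a gradient-boosted ensemble, or a neural network — the property that lets a single interface unify model validation and monitoring across model families.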
Responsible Machine Learning
To reap the full benefits of ML, organizations must also mitigate the considerable risks it presents. This report outlines a set of actionable best practices for people, processes, and technology that can enable organizations to innovate with ML in a responsible manner. Authors Patrick Hall, Navdeep Gill, and Ben Cox focus on the technical issues of ML as well as human-centered issues such as security, fairness, and privacy. The goal is to promote human safety in ML practices so that in the near future, there will be no need to differentiate between the general practice and the responsible practice of ML.
Leading the Charge
Scientists from Intel, Amazon, Facebook and Google joined some of the leading academic minds on artificial intelligence (AI) to discuss the future of machine learning during the inaugural Responsible Machine Learning Summit hosted by UC Santa Barbara. With more than 120 students, faculty, business leaders and invited guests on hand, every speaker and panelist agreed on the importance of establishing an ethical foundation for machine learning, in which a computer uses algorithms and data to make predictions or decisions on its own. "We need to better understand the mutual influence between society and machine learning," said William Wang, a professor of computer science and organizer of the event. "Personally, I'm interested in improving the quality of life by learning the important societal factors and impacts that should be considered when building algorithms, such as fairness, transparency, privacy and accountability." The summit served as the opening for Wang's Center for Responsible Machine Learning.